Search Results

Search found 675 results on 27 pages for 'pk'.

Page 22/27

  • Own params to PeriodicTask run() method in Celery

    - by Alex Isayko
    Hello all! I am writing a small Django application in which I need to create, for each model object, a periodic task that executes at a certain interval. I'm using Celery for this, but I can't understand one thing:

        class ProcessQueryTask(PeriodicTask):
            run_every = timedelta(minutes=1)

            def run(self, query_task_pk, **kwargs):
                logging.info('Process celery task for QueryTask %d' % query_task_pk)
                task = QueryTask.objects.get(pk=query_task_pk)
                task.exec_task()
                return True

    Then I do the following:

        >>> from tasks.tasks import ProcessQueryTask
        >>> result1 = ProcessQueryTask.delay(query_task_pk=1)
        >>> result2 = ProcessQueryTask.delay(query_task_pk=2)

    The first call succeeds, but the subsequent periodic calls fail in celeryd with: TypeError: run() takes exactly 2 non-keyword arguments (1 given). So, can I pass my own params to PeriodicTask run()? Thanks!

    Read the article

  • SQL compare entire rows

    - by zmaster
    In SQL Server 2008 I have some huge tables (200-300+ columns). Every day we run a batch job generating a new table with a timestamp appended to the name. The tables have no PK. I would like a generic way to compare two rows from two of these tables. Showing which columns have different values is sufficient, but showing the values would be perfect. Thanks a lot.

    Update: Thanks for the answers. I ended up writing my own C# tool to do the job, as I'm not allowed to install third-party software in my company.
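
    A minimal T-SQL sketch of the generic comparison, assuming both daily snapshots share the same column list (the snapshot table names here are hypothetical). EXCEPT compares entire rows, so it works even without a PK; it reports whole differing rows rather than the individual columns, which is usually enough to spot the change:

        -- rows present in the new snapshot but not the old
        SELECT * FROM dbo.Batch_20100601
        EXCEPT
        SELECT * FROM dbo.Batch_20100531;

        -- rows present in the old snapshot but not the new
        SELECT * FROM dbo.Batch_20100531
        EXCEPT
        SELECT * FROM dbo.Batch_20100601;

    Pinpointing which of the 200-300 columns differ generically would require dynamic SQL generated from INFORMATION_SCHEMA.COLUMNS.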

    Read the article

  • Easy way to compute how close an auto_increment is to its maximum value?

    - by David M
    So yesterday we had a table that has an auto_increment PK for a smallint that reached its maximum. We had to alter the table on an emergency basis, which is definitely not how we like to roll. Is there an easy way to report on how close each auto_increment field that we use is to its maximum? The best way I can think of is to do a SHOW CREATE TABLE statement, parse out the size of the auto-incremented column, then compare that to the AUTO_INCREMENT value for the table. On the other hand, given that the schema doesn't change very often, should I store information about the columns' maximum values and get the current AUTO_INCREMENT with SHOW TABLE STATUS?
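
    A sketch of the reporting query described above, written against MySQL's information_schema instead of parsing SHOW CREATE TABLE. The CASE maps each column type to its signed maximum; unsigned columns can hold roughly twice as much, so treat the ratio as conservative:

        SELECT t.table_name, c.column_name, c.data_type, t.auto_increment,
               t.auto_increment /
               CASE c.data_type
                    WHEN 'tinyint'   THEN 127
                    WHEN 'smallint'  THEN 32767
                    WHEN 'mediumint' THEN 8388607
                    WHEN 'int'       THEN 2147483647
                    WHEN 'bigint'    THEN 9223372036854775807
               END AS fraction_used
        FROM information_schema.tables t
        JOIN information_schema.columns c
          ON c.table_schema = t.table_schema
         AND c.table_name   = t.table_name
        WHERE c.extra = 'auto_increment'
          AND t.auto_increment IS NOT NULL
        ORDER BY fraction_used DESC;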

    Read the article

  • PHP: How to get the days of the week?

    - by fwaokda
    I want to store items in my database with a DATE value for a certain day, but I don't know how to get the date of the current week's Monday, Tuesday, and so on. Here is my current database setup:

        menuentry
          id            int(10)  PK
          menu_item_id  int(10)  FK
          day_of_week   date
          message       varchar(255)

    I have a class set up that holds all the info, and I was going to do something like this:

        foreach ($menuEntryArray as $item) {
            if ($item->getDate() == [DONT KNOW WHAT TO PUT HERE]) {
                // code to display menu_item information
            }
        }

    I'm just unsure what to put in place of [DONT KNOW WHAT TO PUT HERE] to check whether the date falls on this week's Monday, Tuesday, etc. The foreach above runs once for each day of the week, so the output looks like this:

        Monday
          Item 1
          Item 2
          Item 3
        Tuesday
          Item 1
        Wednesday
          Item 1
          Item 2
        ...

    Thanks!
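
    If the database is MySQL (as the int(10) column types suggest), one way to sidestep the PHP date arithmetic entirely is to let the database pick out the current week's rows - a sketch against the menuentry table above; WEEKDAY() returns 0 for Monday:

        SELECT m.*
        FROM menuentry m
        WHERE m.day_of_week
              BETWEEN CURDATE() - INTERVAL WEEKDAY(CURDATE()) DAY
                  AND CURDATE() - INTERVAL WEEKDAY(CURDATE()) DAY + INTERVAL 6 DAY
        ORDER BY m.day_of_week;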

    Read the article

  • Can't INSERT INTO SELECT into a table with identity column

    - by Eran Goldin
    In SQL Server, I'm using a table variable, and when I'm done manipulating it I want to insert its values into a real table that has an identity column which is also the PK. The table variable I'm building has two columns; the physical table has four, the first of which is the identity column, an integer. The data types of the columns I want to insert match the target columns' data types.

        INSERT INTO [dbo].[Message] ([Name], [Type])
        SELECT DISTINCT [Code], [MessageType]
        FROM @TempTableVariable

    This fails with:

        Cannot insert duplicate key row in object 'dbo.Message' with unique index
        'IX_Message_Id'. The duplicate key value is (ApplicationSelection).

    But when I try to insert just VALUES (...), it works OK. How do I get it right?
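
    A sketch of one fix, assuming the unique index IX_Message_Id enforces uniqueness on the [Name] column: filter out values that already exist in the target before inserting. Note that if the table variable itself contains the same [Code] twice with different [MessageType] values, DISTINCT will not collapse them and the insert can still fail:

        INSERT INTO [dbo].[Message] ([Name], [Type])
        SELECT DISTINCT t.[Code], t.[MessageType]
        FROM @TempTableVariable t
        WHERE NOT EXISTS (SELECT 1
                          FROM [dbo].[Message] m
                          WHERE m.[Name] = t.[Code]);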

    Read the article

  • TypeError when using a Django Form while editing an Entry

    - by damon
    I have an Entry model that can belong to a Category. I provide a CategoryChoicesForm so that the user can choose from the various Categorys (via a dropdown list) when an Entry is created or edited. I am having trouble with the CategoryChoicesForm while editing the Entry: it throws a TypeError. If somebody can make out what is happening, please advise me how to correct this.

        int() argument must be a string or a number, not 'QueryDict'
        /home/Django-1.4/django/db/models/fields/__init__.py in get_prep_value, line 537

        ...views.py in edit_entry
            category_choices_form = CategoryChoicesForm(form_data)
        ...forms.py in __init__
            self.fields['categoryoption'].queryset = Category.objects.filter(creator=self.creator)

    Here is the form:

        class CategoryChoicesForm(forms.Form):
            categoryoption = forms.ModelChoiceField(
                queryset=Category.objects.none(),
                required=False, label='Category')

            def __init__(self, categorycreator, *args, **kwargs):
                super(CategoryChoicesForm, self).__init__(*args, **kwargs)
                self.creator = categorycreator
                self.fields['categoryoption'].queryset = Category.objects.filter(
                    creator=self.creator)

    The edit_entry view is as follows:

        @login_required
        @transaction.commit_on_success
        def edit_entry(request, id, template_name, page_title):
            form_data = get_form_data(request)
            entry = get_object_or_404(Entry, pk=id, author=request.user)
            ...
            category_choices_form = CategoryChoicesForm(form_data)
            ...

    Read the article

  • Django store regular expression in DB which then gets evaluated on page

    - by John
    Hi, I want to store a number of URL patterns in my Django model, to which a user can supply parameters that produce a URL. For example, I might store these three URLs in my DB, where %s is the parameter provided by the user:

        www.thisissomewebsite.com?param=%s
        www.anotherurl/%s/
        www.lastexample.co.uk?param1=%s&fixedparam=2

    As these examples show, the parameter can appear anywhere in the string, not at a fixed position. I have two models, one holding the URL patterns and one holding the variables:

        class URLPatterns(models.Model):
            pattern = models.CharField(max_length=255)

        class URLVariables(models.Model):
            pattern = models.ForeignKey(URLPatterns)
            param = models.CharField(max_length=255)

    What would be the best way to generate these URLs by replacing the %s with the variable in the database? Would it just be a simple replace on the string, e.g.:

        urlvariable = URLVariable.objects.get(pk=1)
        pattern = url.pattern
        url = pattern.replace("%s", urlvariable.param)

    or is there a better way? Thanks

    Read the article

  • OneToMany association updates instead of insert

    - by Shvalb
    I have an entity with a one-to-many association to a child entity. The child entity has a two-column composite PK, and one of the columns is an FK to the parent table. The mapping looks like this:

        @OneToMany(cascade = {CascadeType.ALL}, fetch = FetchType.EAGER)
        @JoinColumn(name="USER_RESULT_SEQUENCES.USER_RESULT_ID",
                    referencedColumnName="USER_RESULT_ID",
                    unique=true, insertable=true, updatable=false)
        private List<UserResultSequence> sequences;

    I create an instance of the parent, add child instances to the list, and then try to save it to the DB. If the child table is empty, it inserts all the children and works perfectly. If the child table is not empty, it updates the existing rows! I don't know why it updates instead of inserting - any ideas why this might happen? Thank you!

    Read the article

  • SQL Server 15MM rows, simple COUNT query. 15+ seconds?

    - by john
    We took over a website from another company after a client decided to switch. We have a table that grows by about 25k records a day and is currently at 15MM records. The table looks something like:

        id         (PK, int, not null)
        member_id  (int, not null)
        another_id (int, not null)
        date       (datetime, not null)

    SELECT COUNT(id) FROM tbl can take up to 15 seconds. A simple inner join on another_id takes over 30 seconds. I can't imagine why this is taking so long. Any advice? SQL Server 2005 Express.
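
    If an exact number is not required, SQL Server 2005 can return a row count from metadata in milliseconds instead of scanning 15MM rows - a sketch, with the table name assumed to be dbo.tbl. Note too that COUNT(*) lets the engine pick the narrowest index, and a nonclustered index on another_id (if one is missing) would be the first thing to check for the slow join:

        -- near-instant, approximately accurate row count
        SELECT SUM(p.row_count) AS approx_rows
        FROM sys.dm_db_partition_stats p
        WHERE p.object_id = OBJECT_ID('dbo.tbl')
          AND p.index_id IN (0, 1);  -- heap or clustered index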

    Read the article

  • Standardize Word macro usage within the team

    - by user138010
    Hi, I have a team of 10 people who work on Word documents, formatting them per our defined guidelines. To complete the work faster we created a macro that has been deployed to all the machines. This macro corrects the font, size and formatting. How can I ensure that nobody can change, replace or delete this macro from their system? If this happens, I should get an alert. Can something be done at the system or program level? Thanks, PK

    Read the article

  • Why isn't INT more efficient than UNIQUEIDENTIFIER (according to the execution plan)?

    - by ck
    I have a parent table and a child table where the columns that join them are of type UNIQUEIDENTIFIER. The child table has a clustered index on the column that joins it to the parent table (its PK, which is also clustered). I have created a copy of both of these tables but changed the relationship columns to INTs instead, and have rebuilt the indexes so that the tables are essentially the same structure and can be queried in the same way. When I query for a known 20 records from the parent table, pulling in all the related records from the child table, I get identical query costs across both, i.e. a 50/50 cost split between the batches. If this is true, then my giant project to change all of the tables like this appears to be pointless, other than speeding up inserts. Can anyone shed any light on the situation?

    EDIT: The question is not about which is more efficient, but why the query execution plan shows both queries as having the same cost.
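
    Plan cost percentages are the optimizer's estimates, not measurements, so an even 50/50 split does not prove the two variants do the same work. Comparing actual I/O is more telling - a sketch with hypothetical table and column names for the two schemas:

        SET STATISTICS IO ON;

        -- GUID-keyed variant
        SELECT c.*
        FROM dbo.ParentGuid p
        JOIN dbo.ChildGuid c ON c.ParentId = p.Id
        WHERE p.Id IN (SELECT TOP 20 Id FROM dbo.ParentGuid ORDER BY Id);

        -- INT-keyed variant
        SELECT c.*
        FROM dbo.ParentInt p
        JOIN dbo.ChildInt c ON c.ParentId = p.Id
        WHERE p.Id IN (SELECT TOP 20 Id FROM dbo.ParentInt ORDER BY Id);

    Any real difference shows up as fewer logical reads for the INT version (4-byte keys pack more rows per index page than 16-byte GUIDs), even when the estimated costs come out identical.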

    Read the article

  • mysql master-master setup as a way to simplify master-slave promotion

    - by Chris Go
    I'm trying to see if the following plan is viable. The goal here is HA (uptime), not load - writes are fine on one MySQL 5.5 server (with InnoDB), but not really possible when the database is down. Currently I have a master-slave replication setup, which works fine except that it has no automatic promotion (obviously). What I am planning to do is set up master-master replication and get this "automatic promotion" using Amazon Route 53 DNS Failover (health checks). What I am trying to avoid is having to do the auto-increment trick, because the "business folks" got used to the auto-incrementing PK as consecutive numbers (yeah, I know this is bad, but the data goes back to 2004). So: set up master-master replication WITHOUT the auto-increment collision prevention bit. The primary master is db1.domain.com and the secondary master is db2.domain.com. In Amazon Route 53, set up a DNS failover record for db.domain.com:

        - primary failover is db1.domain.com, with a TCP health check on the IP address, port 3306
        - secondary failover is db2.domain.com, with a TCP health check on the IP address, port 3306

    Most of the time (99%), unless tcp://db1.domain.com:3306 is dead, db1.domain.com will be served up on DNS hits to db.domain.com. In fact, hopefully this is 100%. The possible downside is the loss of a primary key (collision), and I think I am OK with losing one order. We are a low-data-volume B2B business and can just call our client up if this occurs (like an order disappearing). Does this sound like a good plan? Then I will also run another slave replication off db1.domain.com as "master" to a slave-db1.domain.com - not sure why, maybe for heavy SELECTs?
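
    One safeguard worth adding to this plan, sketched below: keep the standby master read-only except for replication, so a stray write to db2 can't create the PK collision described above. SET GLOBAL read_only does not affect the replication SQL thread or users with SUPER:

        -- on db2.domain.com, the standby master
        SET GLOBAL read_only = 1;

        -- during failover, promote it by re-enabling writes
        SET GLOBAL read_only = 0;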

    Read the article

  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimizes the size of the indexes and the tables. Each function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called by Python will update a row of these kinds of tables (with large arrays). Now, I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57GB, I think) and only a small amount of it for shared memory (1GB, I think). First, I was wondering what the difference between cache and shared memory is in regard to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array) etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks

    Read the article

  • Slow INFORMATION_SCHEMA query

    - by Thomas
    We have a .NET Windows application that runs the following query on login to get some information about the database:

        SELECT t.TABLE_NAME,
               ISNULL(pk_ccu.COLUMN_NAME, '') PK,
               ISNULL(fk_ccu.COLUMN_NAME, '') FK
        FROM INFORMATION_SCHEMA.TABLES t
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk_tc
               ON pk_tc.TABLE_NAME = t.TABLE_NAME
              AND pk_tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE pk_ccu
               ON pk_ccu.CONSTRAINT_NAME = pk_tc.CONSTRAINT_NAME
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk_tc
               ON fk_tc.TABLE_NAME = t.TABLE_NAME
              AND fk_tc.CONSTRAINT_TYPE = 'FOREIGN KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE fk_ccu
               ON fk_ccu.CONSTRAINT_NAME = fk_tc.CONSTRAINT_NAME

    Usually this runs in a couple of seconds, but on one server running SQL Server 2000 it takes over four minutes. I ran it with the execution plan enabled; the results are huge, but this part caught my eye (it won't let me post an image): http://img35.imageshack.us/i/plank.png/

    I then updated the statistics on all of the tables mentioned in the execution plan:

        update statistics sysobjects
        update statistics syscolumns
        update statistics systypes
        update statistics master..spt_values
        update statistics sysreferences

    But that didn't help. The Index Tuning Wizard doesn't help either, because it doesn't let me select system tables. Nothing else is running on this server, so nothing else could be slowing it down. What else can I do to diagnose or fix the problem on that server?
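
    Two more hedged things worth trying before deeper surgery, since the per-table statistics updates didn't help: refresh statistics database-wide and throw away the cached plan in case a bad one is being reused (both commands exist on SQL Server 2000; run them in a quiet window, as FREEPROCCACHE forces every query to recompile):

        EXEC sp_updatestats;   -- refresh statistics across the database
        DBCC FREEPROCCACHE;    -- discard cached plans so the query is re-optimized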

    Read the article

  • Gnome 3 gdm fails to start after preupgrade from fedora 14 to 15

    - by digital illusion
    I'm not able to boot Fedora 15 into runlevel 5. After all services start, when the login screen should appear, gdm just shows a mouse waiting cursor and keeps restarting itself. From /var/log/gdm/\:0-greeter.log:

        Gtk-Message: Failed to load module "pk-gtk-module"
        /usr/bin/gnome-session: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type
        /usr/libexec/gnome-setting-daemon: symbol lookup error: /usr/lib/gtk-3.0modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Where should atk_plug_get_type be defined?

    Edit: Here is a better description of the error:

        (system-config-network-gui:2643): Gnome-WARNING **: Accessibility: failed to find module 'libgail-gnome' which is needed to make this application accessible
        /usr/bin/python: symbol lookup error: /usr/lib/gtk-2.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Why are there still references to gtk2? Did preupgrade fail? Attaching the upgrade log... it seems gdm was not added, but it is present in the users and groups list.

        May 26 11:25:52 sysimage sendmail[1076]: alias database /etc/aliases rebuilt by root
        May 26 11:25:52 sysimage sendmail[1076]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
        May 26 11:46:09 sysimage useradd[1793]: failed adding user 'dbus', data deleted
        May 26 11:53:37 sysimage systemd-machine-id-setup[2443]: Initializing machine ID from D-Bus machine ID.
        May 26 11:55:28 sysimage useradd[2835]: failed adding user 'apache', data deleted
        May 26 11:55:38 sysimage useradd[2842]: failed adding user 'haldaemon', data deleted
        May 26 11:55:43 sysimage useradd[2848]: failed adding user 'smolt', data deleted
        May 26 11:57:32 sysimage sendmail[3032]: alias database /etc/aliases rebuilt by root
        May 26 11:57:32 sysimage sendmail[3032]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
        May 26 11:57:46 sysimage groupadd[3066]: group added to /etc/group: name=cgred, GID=482
        May 26 11:57:47 sysimage groupadd[3066]: group added to /etc/gshadow: name=cgred
        May 26 11:57:47 sysimage groupadd[3066]: new group: name=cgred, GID=482
        May 26 11:58:42 sysimage useradd[3086]: failed adding user 'ntp', data deleted
        May 26 12:00:13 sysimage dbus: avc: received policyload notice (seqno=2)
        May 26 12:15:08 sysimage useradd[4950]: failed adding user 'gdm', data deleted
        May 26 12:24:39 sysimage dbus: avc: received policyload notice (seqno=3)
        May 26 12:25:24 sysimage useradd[5522]: failed adding user 'mysql', data deleted
        May 26 12:25:37 sysimage useradd[5533]: failed adding user 'rpcuser', data deleted
        May 26 12:26:31 sysimage useradd[5592]: failed adding user 'tcpdump', data deleted

    Any suggestions before I revert the installation to F14?

    Read the article

  • Replication keeps popping up on SharePoint databases

    - by Ddono25
    My typical discovery scenario: we receive an alert that the transaction log is growing quickly. We are in Simple Recovery, so I go to check it out. The log is already sized at 100GB and is at 80% capacity. I run the "What's using my log files" script from SQL Server Central and see that replication is enabled on the database. We do not set up replication, and I don't think replication can even be done on SharePoint content DBs, as it is not supported (it requires a PK on all tables). This has been occurring on random servers (about 5 so far, all within the past three weeks), and it only occurs on content databases. sp_removedbreplication does not always remove the replication either. We have found that we need to run sp_removedbreplication, change all DB owners to SA, and reset the recovery mode to Simple to completely eradicate any vestiges of this bug. How would replication be enabling itself? We have never set up replication on these servers. There is no evidence of any type of replication other than the 'log_reuse_wait_desc' from the DMV query and the log growth. Any help on this ghost would be appreciated!
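
    A sketch of the detection and cleanup steps described above, with a hypothetical content database name; sys.databases makes it easy to sweep a server for databases whose log space is pinned by replication:

        -- find databases whose log reuse is blocked on replication
        SELECT name, log_reuse_wait_desc
        FROM sys.databases
        WHERE log_reuse_wait_desc = 'REPLICATION';

        -- attempt to strip replication metadata from an affected database
        EXEC sp_removedbreplication 'WSS_Content_Portal';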

    Read the article

  • Should I partition my main table with 2 million rows?

    - by domribaut
    Hi, I am a developer and need some DBA advice. We are starting to see performance problems with a MSSQL 2005 database. The visible effect of the incidents is mainly CPU hogging on the server, but operations reported that it was also draining resources from the SAN (not always). The main source of the issues is surely some application, but I am wondering if we should partition some of the main tables anyway in order to relieve the I/O pressure. The database is about 60GB in one file. The main table (Order) has 2.1 million rows and 215 columns (but none is huge). We have an integer as PK, so it should be OK to define a partition function. Will we gain anything with partitioning? Will partitioned indexes buy us something? Here are some more facts about the DB and the table:

        database_name  database_size  unallocated space
        My_base        57173.06 MB    79.74 MB

        reserved       data           index_size    unused
        29 444 808 KB  26 577 320 KB  2 845 232 KB  22 256 KB

        name   rows       reserved      data          index_size    unused
        Order  2 097 626  4 403 832 KB  2 756 064 KB  1 646 080 KB  1688 KB

    Thanks for any advice. Dom
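
    For reference, a minimal sketch of what partitioning the Order table by its integer PK could look like (hypothetical names and boundary values; note that table partitioning requires SQL Server 2005 Enterprise Edition, and the I/O relief comes from mapping partitions to filegroups on separate spindles rather than ALL TO ([PRIMARY]) as shown):

        CREATE PARTITION FUNCTION pf_OrderId (int)
        AS RANGE RIGHT FOR VALUES (500000, 1000000, 1500000, 2000000);

        CREATE PARTITION SCHEME ps_OrderId
        AS PARTITION pf_OrderId ALL TO ([PRIMARY]);

        -- rebuild the clustered index (assumed to back the PK on OrderId)
        -- onto the partition scheme
        CREATE UNIQUE CLUSTERED INDEX PK_Order
            ON dbo.[Order] (OrderId)
            WITH (DROP_EXISTING = ON)
            ON ps_OrderId (OrderId);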

    Read the article

  • How to find the next generated value for an auto-increment column?

    - by Tim Büthe
    I am having some trouble with IBM DB2's auto-increment columns. At first, all my columns were defined as GENERATED ALWAYS, but since I had trouble with this when using the "db2 import ..." command, I changed them to GENERATED BY DEFAULT. This is necessary since I need the IDs to be consistent, because other tables reference them, so using "db2 import ... modified by identityignore ..." isn't an option. When I now import data, the IDs are inserted correctly, but every time I do this I have to remember to set a new start for the auto-increment column, by getting the highest ID + 1 and altering the column like this:

        SELECT MAX(mycolumn) + 1 FROM mytable;
        ALTER TABLE mytable ALTER COLUMN mycolumn RESTART WITH <above_result>;

    If I forget this, an INSERT statement will fail with a duplicate-PK error, since the auto-increment column is the primary key. So my question is: is there a way to find the next value for an auto-increment column, so I could write statements that check whether this value is less than the SELECT MAX result and needs to be reset? Or: isn't this whole thing as complicated as it seems to me? Could I somehow import data preserving the IDs and still have the auto-increment column work as expected?
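
    On DB2 for Linux/UNIX/Windows, the identity's state is visible in the catalog, which could serve as the check described above - a sketch, assuming LUW 9.x catalog view names, and bearing in mind that because identity values are cached, NEXTCACHEFIRSTVALUE can run ahead of the highest value actually handed out:

        SELECT a.TABNAME, a.COLNAME, s.NEXTCACHEFIRSTVALUE
        FROM SYSCAT.COLIDENTATTRIBUTES a
        JOIN SYSCAT.SEQUENCES s ON s.SEQID = a.SEQID
        WHERE a.TABNAME = 'MYTABLE'
          AND a.COLNAME = 'MYCOLUMN';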

    Read the article

  • Moving Images from Database to File System

    - by msarchet
    Currently our system stores image files in the database (SQL Express 2005). Unfortunately, nobody foresaw that this would hit the maximum database size allowed by the SQL Express license. So I have proposed a plan of storing the images in the file system and only indexing where each file is in the database. The plan is to save the root path in our OptionsTable as something like ImagesRoot, and then save only the actual ImageID in the table, which is basically an FK from the PK of the record with the image. I have determined that it would be best to split this down into sub-directories inside of the ImagesRoot, one per 1000 images, so basically (ImageID / 1000)\(ImageID % 1000) (e.g. if ImageID is 1999, it would be in %ImagesRoot%\1\999). I'm looking for any potential pitfalls of this system and anything that could be improved, as I am already receiving some resistance from the owner of the company, who wants everything to be in databases. Along those lines, I would also take reasons why it should all be in databases. I should mention we already have automated backups that run for all of our customers' databases and for any files generated by our program that must be kept over a period of time. These backups are optional, but if someone isn't using our system, it is expected that they are using their own, or data loss isn't our problem (it is if our system fails and they are using it!). Thanks

    Read the article

  • HTTP Builder/Groovy - lost 302 (redirect) handling?

    - by Misha Koshelev
    Dear All: I am reading here http://groovy.codehaus.org/modules/http-builder/doc/handlers.html: "In cases where a response sends a redirect status code, this is handled internally by Apache HttpClient, which by default will simply follow the redirect by re-sending the request to the new URL. You do not need to do anything special in order to follow 302 responses." This seems to work fine when I simply use the get() or post() methods without a closure. However, when I use a closure, I seem to lose the 302 handling. Is there some way I can handle this myself? Thank you.

    p.s. Here is my log output showing it is a 302 response:

        FINER: resp.statusLine: "HTTP/1.1 302 Found"

    Here is the relevant code:

        // Copyright (C) 2010 Misha Koshelev. All Rights Reserved.
        package com.mksoft.fbbday.main

        import groovyx.net.http.ContentType

        import java.util.logging.Level
        import java.util.logging.Logger

        class HTTPBuilder {
            def dataDirectory

            HTTPBuilder(dataDirectory) {
                this.dataDirectory = dataDirectory
            }

            // Main logic
            def logger = Logger.getLogger(this.class.name)
            def closure = { resp, reader ->
                logger.finer("resp.statusLine: \"${resp.statusLine}\"")
                if (logger.isLoggable(Level.FINEST)) {
                    def respHeadersString = 'Headers:'
                    resp.headers.each() { header ->
                        respHeadersString += "\n\t${header.name}=\"${header.value}\""
                    }
                    logger.finest(respHeadersString)
                }
                def text = reader.text
                def lastHtml = new File("${dataDirectory}${File.separator}last.html")
                if (lastHtml.exists()) { lastHtml.delete() }
                lastHtml << text
                new XmlSlurper(new org.cyberneko.html.parsers.SAXParser()).parseText(text)
            }

            def processArgs(args) {
                if (logger.isLoggable(Level.FINER)) {
                    def argsString = 'Args:'
                    args.each() { arg -> argsString += "\n\t${arg.key}=\"${arg.value}\"" }
                    logger.finer(argsString)
                }
                args.contentType = groovyx.net.http.ContentType.TEXT
                args
            }

            // HTTPBuilder methods
            def httpBuilder = new groovyx.net.http.HTTPBuilder()

            def get(args) { httpBuilder.get(processArgs(args), closure) }

            def post(args) {
                args.contentType = groovyx.net.http.ContentType.TEXT
                httpBuilder.post(processArgs(args), closure)
            }
        }

    Here is a specific tester:

        #!/usr/bin/env groovy
        import groovyx.net.http.HTTPBuilder
        import groovyx.net.http.Method
        import static groovyx.net.http.ContentType.URLENC

        import java.util.logging.ConsoleHandler
        import java.util.logging.Level
        import java.util.logging.Logger

        // MUST ENTER VALID FACEBOOK EMAIL AND PASSWORD BELOW !!!
        def email = ''
        def pass = ''

        // Remove default loggers
        def logger = Logger.getLogger('')
        def handlers = logger.handlers
        handlers.each() { handler -> logger.removeHandler(handler) }

        // Log ALL to Console
        logger.setLevel Level.ALL
        def consoleHandler = new ConsoleHandler()
        consoleHandler.setLevel Level.ALL
        logger.addHandler(consoleHandler)

        // Facebook - need to get main page to capture cookies
        def http = new HTTPBuilder()
        http.get(uri: 'http://www.facebook.com')

        // Login
        def html = http.post(uri: 'https://login.facebook.com/login.php?login_attempt=1',
                             body: [email: email, pass: pass])
        assert html == null // Why null?

        html = http.post(uri: 'https://login.facebook.com/login.php?login_attempt=1',
                         body: [email: email, pass: pass]) { resp, reader ->
            assert resp.statusLine.statusCode == 302 // Shouldn't we be redirected???
            // http://groovy.codehaus.org/modules/http-builder/doc/handlers.html
            // "In cases where a response sends a redirect status code, this is handled
            // internally by Apache HttpClient, which by default will simply follow the
            // redirect by re-sending the request to the new URL. You do not need to do
            // anything special in order to follow 302 responses."
        }

    Here are the relevant logs:

        FINE: Receiving response: HTTP/1.1 302 Found
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << HTTP/1.1 302 Found
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << Cache-Control: private, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << Expires: Sat, 01 Jan 2000 00:00:00 GMT
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << Location: http://www.facebook.com/home.php?
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << P3P: CP="DSP LAW"
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << Pragma: no-cache
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << Set-Cookie: datr=1275687438-9ff6ae60a89d444d0fd9917abf56e085d370277a6e9ed50c1ba79; expires=Sun, 03-Jun-2012 21:37:24 GMT; path=/; domain=.facebook.com
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << Set-Cookie: lxe=koshelev%40post.harvard.edu; expires=Tue, 28-Sep-2010 15:24:04 GMT; path=/; domain=.facebook.com; httponly
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << Set-Cookie: lxr=deleted; expires=Thu, 04-Jun-2009 21:37:23 GMT; path=/; domain=.facebook.com; httponly
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << Set-Cookie: pk=183883c0a9afab1608e95d59164cc7dd; path=/; domain=.facebook.com; httponly
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << Content-Type: text/html; charset=utf-8
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << X-Cnection: close
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << Date: Fri, 04 Jun 2010 21:37:24 GMT
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.DefaultClientConnection receiveResponseHeader
        FINE: << Content-Length: 0
        Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies
        FINE: Cookie accepted: "[version: 0][name: datr][value: 1275687438-9ff6ae60a89d444d0fd9917abf56e085d370277a6e9ed50c1ba79][domain: .facebook.com][path: /][expiry: Sun Jun 03 16:37:24 CDT 2012]".
        Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies
        FINE: Cookie accepted: "[version: 0][name: lxe][value: koshelev%40post.harvard.edu][domain: .facebook.com][path: /][expiry: Tue Sep 28 10:24:04 CDT 2010]".
        Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies
        FINE: Cookie accepted: "[version: 0][name: lxr][value: deleted][domain: .facebook.com][path: /][expiry: Thu Jun 04 16:37:23 CDT 2009]".
        Jun 4, 2010 4:37:22 PM org.apache.http.client.protocol.ResponseProcessCookies processCookies
        FINE: Cookie accepted: "[version: 0][name: pk][value: 183883c0a9afab1608e95d59164cc7dd][domain: .facebook.com][path: /][expiry: null]".
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.client.DefaultRequestDirector execute
        FINE: Connection can be kept alive indefinitely
        Jun 4, 2010 4:37:22 PM groovyx.net.http.HTTPBuilder doRequest
        FINE: Response code: 302; found handler: post302$_run_closure2@7023d08b
        Jun 4, 2010 4:37:22 PM groovyx.net.http.HTTPBuilder doRequest
        FINEST: response handler result: null
        Jun 4, 2010 4:37:22 PM org.apache.http.impl.conn.SingleClientConnManager releaseConnection
        FINE: Releasing connection org.apache.http.impl.conn.SingleClientConnManager$ConnAdapter@605b28c9

    You can see there is clearly a Location header. Thank you, Misha

    Read the article

  • Deadlock in SQL Server 2005! Two real-time bulk upserts are fighting. WHY?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live-updating stock prices. One process, 'LiveFeed', reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table, then insert/update to MDC table' pattern (BulkUpsert). Another process then reads this data, computes other data, and saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, a multitude of users run a C# GUI polling the MDC table and reading updates from it. During the day, when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database - nowadays 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info:

    Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            (
                [MDID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like. Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction

        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null
                               and PriceSource like 'LiveFeed.%')
          and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source
        from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null
                             and PriceSource like 'LiveFeed.%')

        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction

        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2

        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold. I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!

    In response to the request for the XDL, here it is:

        <deadlock-list>
          <deadlock victim="processc19978">
            <process-list>
              <process id="processaf0b68" taskpriority="0" logused="0" waitresource="KEY: 6:72057594090487808 (d900ed5a6cc6)" waittime="718" ownerId="1102128174" transactionname="user_transaction" lasttranstarted="2010-06-11T16:30:44.750" XDES="0xffffffff817f9a40" lockMode="U" schedulerid="3" kpid="8228" status="suspended" spid="73" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-06-11T16:30:44.750" lastbatchcompleted="2010-06-11T16:30:44.750" clientapp=".Net SqlClient Data Provider" hostname="RISKAPPS_VM" hostpid="3836" loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1102128174" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
                <executionStack>
                  <frame procname="MKP_RISKDB.dbo.MarketDataCurrent_BulkUpload" line="28" stmtstart="1062" stmtend="1720" sqlhandle="0x03000600a28e5e4ef4fd8e00849d00000100000000000000">
                    UPDATE c WITH (ROWLOCK) SET LastUpdate = getdate(), Value = t.Value, Source = @source FROM MarketDataCurrent c INNER JOIN #MDTUP t ON c.MDID = t.mdid WHERE c.lastUpdate &lt; @updateTime and c.mdid not in (select mdid from MarketData where BloombergTicker is not null and PriceSource like &apos;Blbg.%&apos;) and c.value &lt;&gt; t.value
                  </frame>
                  <frame procname="adhoc" line="1" stmtstart="88" sqlhandle="0x01000600c1653d0598706ca7000000000000000000000000">
                    exec MarketDataCurrent_BulkUpload @clearBefore, @source
                  </frame>
                  <frame procname="unknown" line="1" sqlhandle="0x000000000000000000000000000000000000000000000000">
                    unknown
                  </frame>
                </executionStack>
                <inputbuf>
                  (@clearBefore datetime,@source nvarchar(10))exec MarketDataCurrent_BulkUpload @clearBefore, @source
                </inputbuf>
              </process>
              <process id="processc19978" taskpriority="0" logused="0" waitresource="KEY: 6:72057594090487808 (74008e31572b)" waittime="718" ownerId="1102128228" transactionname="user_transaction" lasttranstarted="2010-06-11T16:30:44.780" XDES="0x380be9d8" lockMode="U" schedulerid="5" kpid="8464" status="suspended" spid="248" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-06-11T16:30:44.780" lastbatchcompleted="2010-06-11T16:30:44.780" clientapp=".Net SqlClient Data Provider" hostname="RISKBBG_VM" hostpid="4480" loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1102128228" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
                <executionStack>
                  <frame procname="MKP_RISKDB.dbo.MarketDataCurrentBlbgRtUpload" line="14" stmtstart="840" stmtend="1220" sqlhandle="0x03000600005f9d24c8878f00849d00000100000000000000">
                    UPDATE c WITH (ROWLOCK) SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source FROM MarketDataCurrent c INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid; -- Insert new MDID
                  </frame>
                  <frame procname="adhoc" line="1" sqlhandle="0x010006004a58132228bf8d73000000000000000000000000">
                    MarketDataCurrentBlbgRtUpload
                  </frame>
                </executionStack>
                <inputbuf>
                  MarketDataCurrentBlbgRtUpload
                </inputbuf>
              </process>
            </process-list>
            <resource-list>
              <keylock hobtid="72057594090487808" dbid="6" objectname="MKP_RISKDB.dbo.MarketDataCurrent" indexname="PK_MarketDataCurrent" id="lock5ba77b00" mode="U" associatedObjectId="72057594090487808">
                <owner-list>
                  <owner id="processc19978" mode="U"/>
                </owner-list>
                <waiter-list>
                  <waiter id="processaf0b68" mode="U" requestType="wait"/>
                </waiter-list>
              </keylock>
              <keylock hobtid="72057594090487808" dbid="6" objectname="MKP_RISKDB.dbo.MarketDataCurrent" indexname="PK_MarketDataCurrent" id="lock65dca340" mode="U" associatedObjectId="72057594090487808">
                <owner-list>
                  <owner id="processaf0b68" mode="U"/>
                </owner-list>
                <waiter-list>
                  <waiter id="processc19978" mode="U" requestType="wait"/>
                </waiter-list>
              </keylock>
            </resource-list>
          </deadlock>
        </deadlock-list>

    Read the article

  • uploadify scriptData problem

    - by elpaso66
    Hi, I'm having problems with scriptData in uploadify. I'm pretty sure the config syntax is fine, but whatever I do, scriptData is not passed to the upload script. I tested in both FF and Chrome with Flash (Shockwave Flash 9.0 r31). This is the config:

        $(document).ready(function() {
            $('#id_file').uploadify({
                'uploader'       : '/media/filebrowser/uploadify/uploadify.swf',
                'script'         : '/admin/filebrowser/upload_file/',
                'scriptData'     : {'session_key': 'e1b552afde044bdd188ad51af40cfa8e'},
                'checkScript'    : '/admin/filebrowser/check_file/',
                'cancelImg'      : '/media/filebrowser/uploadify/cancel.png',
                'auto'           : false,
                'folder'         : '',
                'multi'          : true,
                'fileDesc'       : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
                'fileExt'        : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
                'sizeLimit'      : 10485760,
                'scriptAccess'   : 'sameDomain',
                'queueSizeLimit' : 50,
                'simUploadLimit' : 1,
                'width'          : 300,
                'height'         : 30,
                'hideButton'     : false,
                'wmode'          : 'transparent',
                translations     : {
                    browseButton: 'BROWSE',
                    error: 'An Error occured',
                    completed: 'Completed',
                    replaceFile: 'Do you want to replace the file',
                    unitKb: 'KB',
                    unitMb: 'MB'
                }
            });
            $('input:submit').click(function(){
                $('#id_file').uploadifyUpload();
                return false;
            });
        });

    I checked that other values (the file name) are passed correctly, but session_key is not. This is the decorator code from django-filebrowser; you can see it checks request.POST.get('session_key') - the problem is that request.POST is empty.

        def flash_login_required(function):
            """
            Decorator to recognize a user by its session.
            Used for Flash-Uploading.
            """
            def decorator(request, *args, **kwargs):
                try:
                    engine = __import__(settings.SESSION_ENGINE, {}, {}, [''])
                except:
                    import django.contrib.sessions.backends.db
                    engine = django.contrib.sessions.backends.db
                print request.POST
                session_data = engine.SessionStore(request.POST.get('session_key'))
                user_id = session_data['_auth_user_id']
                # will return 404 if the session ID does not resolve to a valid user
                request.user = get_object_or_404(User, pk=user_id)
                return function(request, *args, **kwargs)
            return decorator

    Read the article

  • Django 1.2 + South 0.7 + django-annoying's AutoOneToOneField leads to TypeError: 'LegacyConnection'

    - by konrad
    I'm using Django 1.2 trunk with South 0.7 and an AutoOneToOneField copied from django-annoying. South complained that the field did not have rules defined, and the new version of South no longer has an automatic field-type parser. So I read the South documentation and wrote the following definition (basically an exact copy of the OneToOneField rules):

        rules = [
            (
                (AutoOneToOneField),
                [],
                {
                    "to": ["rel.to", {}],
                    "to_field": ["rel.field_name", {"default_attr": "rel.to._meta.pk.name"}],
                    "related_name": ["rel.related_name", {"default": None}],
                    "db_index": ["db_index", {"default": True}],
                },
            )
        ]

        from south.modelsinspector import add_introspection_rules
        add_introspection_rules(rules, ["^myapp"])

    Now South raises the following error when I do a schemamigration:

        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "django/core/management/__init__.py", line 438, in execute_manager
            utility.execute()
          File "django/core/management/__init__.py", line 379, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "django/core/management/base.py", line 196, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "django/core/management/base.py", line 223, in execute
            output = self.handle(*args, **options)
          File "South-0.7-py2.6.egg/south/management/commands/schemamigration.py", line 92, in handle
            (k, v) for k, v in freezer.freeze_apps([migrations.app_label()]).items()
          File "South-0.7-py2.6.egg/south/creator/freezer.py", line 33, in freeze_apps
            model_defs[model_key(model)] = prep_for_freeze(model)
          File "South-0.7-py2.6.egg/south/creator/freezer.py", line 65, in prep_for_freeze
            fields = modelsinspector.get_model_fields(model, m2m=True)
          File "South-0.7-py2.6.egg/south/modelsinspector.py", line 322, in get_model_fields
            args, kwargs = introspector(field)
          File "South-0.7-py2.6.egg/south/modelsinspector.py", line 271, in introspector
            arg_defs, kwarg_defs = matching_details(field)
          File "South-0.7-py2.6.egg/south/modelsinspector.py", line 187, in matching_details
            if any([isinstance(field, x) for x in classes]):
        TypeError: 'LegacyConnection' object is not iterable

    Is this related to a recent change in Django 1.2 trunk? How do I fix this? I use this field as follows:

        class Bar(models.Model):
            foo = AutoOneToOneField("foo.Foo", primary_key=True, related_name="bar")

    For reference, the field code from django-annoying:

        class AutoSingleRelatedObjectDescriptor(SingleRelatedObjectDescriptor):
            def __get__(self, instance, instance_type=None):
                try:
                    return super(AutoSingleRelatedObjectDescriptor, self).__get__(instance, instance_type)
                except self.related.model.DoesNotExist:
                    obj = self.related.model(**{self.related.field.name: instance})
                    obj.save()
                    return obj

        class AutoOneToOneField(OneToOneField):
            def contribute_to_related_class(self, cls, related):
                setattr(cls, related.get_accessor_name(),
                        AutoSingleRelatedObjectDescriptor(related))

    Read the article

  • LINQ To SQL ignore unique constraint exception and continue

    - by Martin
    I have a single table in a database called Users:

        Users
        -----
        ID       (PK, Identity)
        Username (Unique Index)

    I have set up a unique index on the Username column to prevent duplicates. I am then enumerating through a collection and creating a new user in the database for each item. What I want to do is just insert a new user and ignore the exception if the unique key constraint is violated (as it's clearly a duplicate record in that case). This is to avoid having to craft WHERE NOT EXISTS kinds of queries. First off, is this going to be any more efficient, or should my insert code be checking for duplicates instead? I'm drawn to having the database hold that logic, as this prevents any other type of client from inserting duplicate data. My other issue is related to LINQ to SQL. I have the following code:

        public class TestRepo
        {
            DatabaseDataContext database = new DatabaseDataContext();

            public void Add(string username)
            {
                database.Users.InsertOnSubmit(new User() { Username = username });
            }

            public void Save()
            {
                database.SubmitChanges();
            }
        }

    And then I iterate over a collection and insert new users, ignoring any exceptions:

        TestRepo repo = new TestRepo();

        foreach (var name in new string[] { "Tim", "Bob", "John" })
        {
            try
            {
                repo.Add(name);
                repo.Save();
            }
            catch { }
        }

    The first time this is run, great - I have three users in the table. If I remove the second one and run this code again, nothing is inserted. I expected the first insert to fail with the exception, the second to succeed (as I just removed that user from the DB) and the third to then fail. What seems to be happening is that once the SqlException is thrown (even though the loop continues to iterate), all of the subsequent inserts fail - even when there isn't a row in the table that would cause a unique violation. Can anyone explain this?

    P.S. The only workaround I could find was to instantiate the repo each time before the insert; then it worked exactly as expected - indicating that it's something to do with the LINQ to SQL DataContext. Thanks.

    Read the article

  • Flex chart axis title blinking on resize

    - by Anoop
    I have a chart with titles for the horizontal and vertical axes. The verticalAxisTitleAlignment property of the vertical axis renderer is set to "vertical". The application is a portal sort of thing, and this chart is placed inside a small window. On click of the window I resize the parent of the chart, but at that moment the title of the axis blinks, which gives the feeling that the resizing is not smooth. I don't understand why this happens. Does the title render again each time the component is maximized? Is there any way to stop that? Thanks in advance. Cheers, PK

    This is the code I am using to draw the title:

        <mx:horizontalAxis>
            <mx:LinearAxis id="haxis" title="Share" interval="5"/>
        </mx:horizontalAxis>
        <mx:verticalAxis>
            <mx:LinearAxis id="vaxis" title="{'Aided'+ '\n'+ ' awareness'}" interval="5"/>
        </mx:verticalAxis>
        <mx:horizontalAxisRenderers>
            <mx:AxisRenderer axis="{haxis}" axisStroke="{axisStroke}" tickPlacement="none"/>
        </mx:horizontalAxisRenderers>
        <mx:verticalAxisRenderers>
            <mx:AxisRenderer axis="{vaxis}" axisStroke="{axisStroke}"
                             verticalAxisTitleAlignment="vertical" tickPlacement="none"/>
        </mx:verticalAxisRenderers>

    Read the article
